 Robots in the Workplace


Robot Talk Episode 113 – Soft robotic hands, with Kaspar Althoefer

Robohub

Kaspar Althoefer is Director of the Centre for Advanced Robotics at Queen Mary University of London (QMUL). His research focuses on soft robotics, tactile perception, intelligent manipulation, and machine learning techniques for sensor signal interpretation. His research advancements have significant applications in robot-assisted minimally invasive surgery, rehabilitation, assistive technologies, and human-robot interactions within a range of scenarios, including manufacturing. Before joining QMUL, he was a Professor at King's College London, where he also earned his PhD.


CAFEs: Cable-driven Collaborative Floating End-Effectors for Agriculture Applications

arXiv.org Artificial Intelligence

CAFEs (Collaborative Agricultural Floating End-effectors) are a new robot design and control approach for automating large-scale agricultural tasks. Built on a cable-driven robot architecture in which modular robotic arms share the same roller-driven cable set, a fast-switching clamping mechanism allows each CAFE to clamp onto or release from the moving cables, enabling both independent and synchronized movement across the workspace. The methods developed to enable this system include the mechanical design, precise position control, and a dynamic model for the spring-mass-like system, ensuring accurate and stable movement of the robotic arms. The system's scalability is further explored by studying the tension and sag in the cables to maintain performance as more robotic arms are deployed. Experimental and simulation results demonstrate the system's effectiveness in tasks including pick-and-place, showing its potential to contribute to agricultural automation.
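
For intuition, here is a minimal Python sketch of a spring-mass-like chain of end-effectors clamped along an elastic cable, in the spirit of the dynamic model mentioned above; the stiffness, damping, mass, and drive-force values are illustrative assumptions, not parameters from the paper.

```python
# Minimal sketch (not the paper's implementation): a 1-D spring-mass chain
# approximating end-effectors clamped to an elastic cable. Segment stiffness,
# masses, damping, and the drive force are illustrative assumptions.
import numpy as np

def simulate_cable_chain(n_arms=3, k=500.0, m=2.0, c=5.0,
                         f_drive=None, dt=1e-3, steps=2000):
    """Integrate positions/velocities of n_arms masses joined by springs."""
    x = np.linspace(0.0, 1.0, n_arms)   # clamp positions along the cable [m]
    rest = np.diff(x).copy()            # natural segment lengths
    v = np.zeros(n_arms)
    if f_drive is None:
        f_drive = np.zeros(n_arms)
        f_drive[0] = 10.0               # roller-driven force on the first clamp [N]
    for _ in range(steps):
        f = f_drive - c * v             # external drive minus viscous damping
        stretch = np.diff(x) - rest     # cable-segment elongation
        f[:-1] += k * stretch           # pull from the segment ahead
        f[1:]  -= k * stretch           # reaction on the segment behind
        v += (f / m) * dt
        x += v * dt
    return x, v

if __name__ == "__main__":
    pos, vel = simulate_cable_chain()
    print("final clamp positions [m]:", np.round(pos, 3))
```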


MRS-CWC: A Weakly Constrained Multi-Robot System with Controllable Constraint Stiffness for Mobility and Navigation in Unknown 3D Rough Environments

arXiv.org Artificial Intelligence

Navigating unknown three-dimensional (3D) rugged environments is challenging for multi-robot systems. Traditional discrete systems struggle with rough terrain due to limited individual mobility, while modular systems--where rigid, controllable constraints link robot units--improve traversal but suffer from high control complexity and reduced flexibility. To address these limitations, we propose the Multi-Robot System with Controllable Weak Constraints (MRS-CWC), where robot units are connected by constraints with dynamically adjustable stiffness. This adaptive mechanism softens or stiffens in real-time during environmental interactions, ensuring a balance between flexibility and mobility. We formulate the system's dynamics and control model and evaluate MRS-CWC against six baseline methods and an ablation variant in a benchmark dataset with 100 different simulation terrains. Results show that MRS-CWC achieves the highest navigation completion rate and ranks second in success rate, efficiency, and energy cost in the highly rugged terrain group, outperforming all baseline methods without relying on environmental modeling, path planning, or complex control. Even where MRS-CWC ranks second, its performance is only slightly behind a more complex ablation variant with environmental modeling and path planning. Finally, we develop a physical prototype and validate its feasibility in a constructed rugged environment. For videos, simulation benchmarks, and code, please visit https://wyd0817.github.io/project-mrs-cwc/.
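
As a rough illustration of the controllable-stiffness idea (not the MRS-CWC controller itself), the following sketch adapts a constraint's stiffness from the measured constraint force, softening under strong terrain interaction and stiffening otherwise; the thresholds, limits, and adaptation rate are assumed values.

```python
# Minimal sketch (assumption, not the MRS-CWC controller): adapting the
# stiffness of an inter-unit constraint from the measured constraint force,
# softening on hard contact and stiffening when the units move freely.
import numpy as np

def update_stiffness(k, constraint_force, k_min=10.0, k_max=400.0,
                     f_soft=30.0, rate=0.2):
    """Return an updated stiffness value clipped to [k_min, k_max]."""
    if abs(constraint_force) > f_soft:   # strong interaction: soften for flexibility
        k = k * (1.0 - rate)
    else:                                # weak interaction: stiffen for mobility
        k = k * (1.0 + rate)
    return float(np.clip(k, k_min, k_max))

# Example: stiffness trajectory during a burst of terrain contact forces.
k = 200.0
for f in [5.0, 8.0, 45.0, 60.0, 40.0, 10.0, 5.0]:
    k = update_stiffness(k, f)
    print(f"force={f:5.1f} N -> stiffness={k:6.1f} N/m")
```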


A Novel Control Strategy for Offset Points Tracking in the Context of Agricultural Robotics

arXiv.org Artificial Intelligence

In this paper, we present a novel method to control a rigidly connected location on the vehicle, such as a point on the implement in the case of agricultural tasks. Agricultural robots are transforming modern farming by enabling precise and efficient operations, replacing humans in arduous tasks while reducing the use of chemicals. Traditionally, path-following algorithms are designed to guide the vehicle's center along a predefined trajectory. However, since the actual agronomic task is performed by the implement, it is essential to control a specific point on the implement itself rather than the vehicle's center. As such, we present in this paper two approaches for achieving the control of an offset point on the robot. The first approach adapts existing control laws, initially intended for the rear axle's midpoint, to manage the desired lateral deviation. The second approach employs backstepping control techniques to create a control law that directly targets the implement. We conduct real-world experiments, highlighting the limitations of traditional approaches to offset-point control and demonstrating the strengths and weaknesses of the proposed methods.
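
To make the offset-point idea concrete, the sketch below computes the position and velocity of a point rigidly offset from the rear-axle midpoint of a kinematic bicycle model; this is the quantity an offset-point controller would regulate. The vehicle parameters and offset are hypothetical, and the sketch does not reproduce either of the paper's control laws.

```python
# Minimal sketch (illustrative, not the paper's control law): kinematics of a
# point rigidly offset from the rear-axle midpoint of a kinematic bicycle model.
import numpy as np

def offset_point_state(x, y, theta, v, delta, wheelbase=2.5,
                       dx_off=-1.5, dy_off=0.4):
    """Position and velocity of an implement point offset by (dx_off, dy_off)
    in the body frame from the rear-axle midpoint."""
    omega = v * np.tan(delta) / wheelbase             # yaw rate of the vehicle
    # Offset point position in the world frame.
    px = x + dx_off * np.cos(theta) - dy_off * np.sin(theta)
    py = y + dx_off * np.sin(theta) + dy_off * np.cos(theta)
    # Rigid-body velocity: vehicle translation plus rotation about the rear axle.
    vx = v * np.cos(theta) - omega * (py - y)
    vy = v * np.sin(theta) + omega * (px - x)
    return (px, py), (vx, vy)

if __name__ == "__main__":
    (px, py), (vx, vy) = offset_point_state(x=0.0, y=0.0, theta=0.1,
                                            v=1.5, delta=0.2)
    print(f"implement point: ({px:.2f}, {py:.2f}) m, velocity ({vx:.2f}, {vy:.2f}) m/s")
```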


Scalable Real2Sim: Physics-Aware Asset Generation Via Robotic Pick-and-Place Setups

arXiv.org Artificial Intelligence

Simulating object dynamics from real-world perception shows great promise for digital twins and robotic manipulation but often demands labor-intensive measurements and expertise. We present a fully automated Real2Sim pipeline that generates simulation-ready assets for real-world objects through robotic interaction. Using only a robot's joint torque sensors and an external camera, the pipeline identifies visual geometry, collision geometry, and physical properties such as inertial parameters. Our approach introduces a general method for extracting high-quality, object-centric meshes from photometric reconstruction techniques (e.g., NeRF, Gaussian Splatting) by employing alpha-transparent training while explicitly distinguishing foreground occlusions from background subtraction. We validate the full pipeline through extensive experiments, demonstrating its effectiveness across diverse objects. By eliminating the need for manual intervention or environment modifications, our pipeline can be integrated directly into existing pick-and-place setups, enabling scalable and efficient dataset creation.
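
As a simplified illustration of identifying physical properties from joint torques (not the paper's pipeline), the following sketch estimates an object's mass by linear least squares from simulated gravity torques at several joint configurations; the single-joint setup, geometry, and noise level are assumptions.

```python
# Minimal sketch (assumption, not the paper's pipeline): least-squares
# identification of an object's mass from gravity torques measured by a single
# joint while the arm holds the object at several known configurations.
import numpy as np

g, arm_len = 9.81, 0.6                  # gravity [m/s^2], joint-to-object distance [m]
q = np.linspace(-1.0, 1.0, 50)          # joint angles [rad]
true_mass = 0.75                        # kg (ground truth used only to simulate data)

# Simulated torque measurements: tau = m * g * l * cos(q) + sensor noise.
rng = np.random.default_rng(0)
tau = true_mass * g * arm_len * np.cos(q) + rng.normal(0.0, 0.02, q.size)

# Linear least squares in the unknown mass: tau ~ A @ [m].
A = (g * arm_len * np.cos(q)).reshape(-1, 1)
m_hat, *_ = np.linalg.lstsq(A, tau, rcond=None)
print(f"estimated mass: {m_hat[0]:.3f} kg (true {true_mass} kg)")
```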


Robotic Automation in Apparel Manufacturing: A Novel Approach to Fabric Handling and Sewing

arXiv.org Artificial Intelligence

Sewing garments using robots has consistently posed a research challenge due to the inherent complexities in fabric manipulation. In this paper, we introduce an intelligent robotic automation system designed to address this issue. By employing a patented technique that temporarily stiffens garments, we eliminate the traditional necessity for fabric modeling. Our methodological approach is rooted in a meticulously designed three-stage pipeline: first, an accurate pose estimation of the cut fabric pieces; second, a procedure to temporarily join fabric pieces; and third, a closed-loop visual servoing technique for the sewing process. Demonstrating versatility across various fabric types, our approach has been successfully validated in practical settings, notably with cotton material at the Bluewater Defense production line and denim material at Levi's research facility. The techniques described in this paper integrate robotic mechanisms with traditional sewing machines, devise a real-time sewing algorithm, and provide hands-on validation through a collaborative robot setup.
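
For a flavor of the closed-loop visual servoing stage (a hypothetical sketch, not the patented system), the snippet below turns a detected seam offset in pixels into a bounded lateral correction; the pixel-to-metre scale, gain, and saturation are assumed values.

```python
# Minimal sketch (illustrative): a proportional image-based servo step that
# keeps a detected seam edge aligned with the needle line while fabric is fed.
import numpy as np

def servo_step(seam_offset_px, px_to_m=0.0004, gain=2.0, max_correction=0.01):
    """Lateral correction [m] from the seam's pixel offset in the camera image."""
    correction = -gain * seam_offset_px * px_to_m
    return float(np.clip(correction, -max_correction, max_correction))

# Example: offsets from a (simulated) seam detector over a few control cycles.
for offset in [12.0, 7.5, 3.0, -1.0, 0.5]:
    print(f"seam offset {offset:6.1f} px -> lateral correction {servo_step(offset)*1000:6.2f} mm")
```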


HI-GVF: Shared Control based on Human-Influenced Guiding Vector Fields for Human-multi-robot Cooperation

arXiv.org Artificial Intelligence

Human-multi-robot shared control leverages human decision-making and robotic autonomy to enhance human-robot collaboration. While widely studied, existing systems often adopt a leader-follower model, limiting robot autonomy to some extent. In addition, the human is required to participate directly in the motion control of the robots through teleoperation, which significantly burdens the operator. To alleviate these two issues, we propose a layered shared control computing framework using human-influenced guiding vector fields (HI-GVF) for human-robot collaboration. HI-GVF guides the multi-robot system along a desired path specified by the human. Then, an intention field is designed to merge the human and robot intentions, accelerating the propagation of the human intention within the multi-robot system. Moreover, we provide a stability analysis of the proposed model and use collision avoidance based on safety barrier certificates to fine-tune the velocity. Finally, considering a firefighting task as an example scenario, we conduct simulations and experiments using multiple human-robot interfaces (brain-computer interface, myoelectric wristband, eye-tracking), and the results demonstrate that our proposed approach boosts the effectiveness and performance of the task.
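
As background, a guiding vector field for path following can be written down compactly; the sketch below gives the standard formulation for a circular path, without the paper's human-influence or intention-field terms, and the path and gain are assumed values.

```python
# Minimal sketch (illustrative, not the HI-GVF formulation): a standard guiding
# vector field for a circular path, combining the path tangent with a term that
# pulls the robot back onto the path.
import numpy as np

def gvf_circle(p, center=np.zeros(2), radius=2.0, k_e=1.0):
    """Desired velocity direction at position p for following a circle."""
    d = p - center
    e = np.dot(d, d) - radius**2            # level-set error phi(p) = |d|^2 - r^2
    grad = 2.0 * d                          # gradient of phi
    tangent = np.array([-grad[1], grad[0]]) # gradient rotated by 90 degrees
    v = tangent - k_e * e * grad            # follow the path, correct the error
    return v / (np.linalg.norm(v) + 1e-9)

if __name__ == "__main__":
    for p in [np.array([3.0, 0.0]), np.array([0.0, 1.0])]:
        print(p, "->", np.round(gvf_circle(p), 3))
```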


Global-Local Interface for On-Demand Teleoperation

arXiv.org Artificial Intelligence

Teleoperation is a critical method of human-robot interaction and holds significant potential for enabling robotic applications in industrial and unstructured environments. Existing teleoperation methods have distinct strengths and limitations in flexibility, workspace range, and precision. To fuse these advantages, we introduce the Global-Local (G-L) Teleoperation Interface. This interface decouples robotic teleoperation into global behavior, which ensures the robot's motion range and intuitiveness, and local behavior, which enhances the human operator's dexterity and capability for performing fine tasks. The G-L interface enables efficient teleoperation not only for conventional tasks like pick-and-place, but also for challenging fine manipulation and large-scale movements. Based on the G-L interface, we constructed single-arm and dual-arm teleoperation systems with different remote control devices, then demonstrated tasks requiring large motion range, precise manipulation, or dexterous end-effector control. Extensive experiments validated the user-friendliness, accuracy, and generalizability of the proposed interface.
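
A minimal sketch of the decoupling idea, under the assumption that a coarse global target and an attenuated local refinement are simply composed; the devices, scale factor, and poses are hypothetical, and this is not the G-L interface implementation.

```python
# Minimal sketch (assumption): composing a coarse "global" target from one
# input device with a scaled-down "local" refinement from another, yielding
# the commanded end-effector position.
import numpy as np

def compose_command(global_pose, local_delta, local_scale=0.1):
    """Global target [m] plus an attenuated local correction [m]."""
    return np.asarray(global_pose) + local_scale * np.asarray(local_delta)

# Example: large-range positioning with one device, fine adjustment with another.
cmd = compose_command(global_pose=[0.45, -0.20, 0.30], local_delta=[0.02, -0.05, 0.01])
print("commanded end-effector position [m]:", np.round(cmd, 4))
```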


Data efficient Robotic Object Throwing with Model-Based Reinforcement Learning

arXiv.org Artificial Intelligence

Pick-and-place (PnP) operations, featuring object grasping and trajectory planning, are fundamental in industrial robotics applications. Despite many advancements in the field, PnP is limited by workspace constraints, reducing flexibility. Pick-and-throw (PnT) is a promising alternative where the robot throws objects to target locations, leveraging extrinsic resources like gravity to improve efficiency and expand the workspace. However, PnT execution is complex, requiring precise coordination of high-speed movements and object dynamics. Solutions to the PnT problem are categorized into analytical and learning-based approaches. Analytical methods focus on system modeling and trajectory generation but are time-consuming and offer limited generalization. Learning-based solutions, in particular Model-Free Reinforcement Learning (MFRL), offer automation and adaptability but require extensive interaction time. This paper introduces a Model-Based Reinforcement Learning (MBRL) framework, MC-PILOT, which combines data-driven modeling with policy optimization for efficient and accurate PnT tasks. MC-PILOT accounts for model uncertainties and release errors, demonstrating superior performance in simulations and real-world tests with a Franka Emika Panda manipulator. The proposed approach generalizes rapidly to new targets, offering advantages over analytical and Model-Free methods.
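
For context, a nominal ballistic model is the kind of prior a model-based throwing policy can be built around before correcting for model uncertainty and release errors; the sketch below is such a drag-free model with illustrative release values, not MC-PILOT itself.

```python
# Minimal sketch (not MC-PILOT): analytical landing-point prediction from the
# release state of a thrown object, ignoring aerodynamic drag.
import numpy as np

def landing_point(release_pos, release_vel, g=9.81, ground_z=0.0):
    """Horizontal landing position and flight time of a point mass released at
    release_pos (x, y, z) with velocity release_vel (vx, vy, vz)."""
    x0, y0, z0 = release_pos
    vx, vy, vz = release_vel
    # Solve z0 + vz*t - 0.5*g*t^2 = ground_z for the positive root.
    disc = vz**2 + 2.0 * g * (z0 - ground_z)
    t = (vz + np.sqrt(disc)) / g
    return np.array([x0 + vx * t, y0 + vy * t]), t

if __name__ == "__main__":
    land, t = landing_point(release_pos=(0.4, 0.0, 0.9), release_vel=(3.0, 0.0, 1.5))
    print(f"predicted landing {np.round(land, 2)} m after {t:.2f} s")
```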


Knowledge in multi-robot systems: an interplay of dynamics, computation and communication

arXiv.org Artificial Intelligence

We show that the hybrid systems perspective of distributed multi-robot systems is compatible with logical models of knowledge already used in distributed computing, and demonstrate its usefulness by deriving sufficient epistemic conditions for exploration and gathering robot tasks to be solvable. We provide a separation of the physical and computational aspects of a robotic system, allowing us to decouple the problems related to each and directly use methods from control theory and distributed computing, fields that are traditionally distant in the literature. Finally, we demonstrate a novel approach for reasoning about the knowledge in multi-robot systems through a principled method of converting a switched hybrid dynamical system into a temporal-epistemic logic model, passing through an abstract state machine representation. This creates space for methods and results to be exchanged across the fields of control theory, distributed computing and temporal-epistemic logic, while reasoning about multi-robot systems.
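
To illustrate the epistemic side with a toy example (not the paper's construction), the sketch below treats a two-robot abstraction as a Kripke-style model with per-robot indistinguishability relations and checks whether a robot knows that a region has been explored; the states and propositions are invented for illustration.

```python
# Minimal sketch (toy example): a finite abstraction of a two-robot system as a
# Kripke-style model. States are (robot0_region, robot1_region, explored_flag).
states = [("A", "A", False), ("A", "B", True), ("B", "A", True), ("B", "B", True)]

def indistinguishable(i, s, t):
    """Robot i cannot tell s from t if its own region is the same in both."""
    return s[i] == t[i]

def knows(i, prop, s):
    """K_i prop holds at s iff prop holds at every state robot i cannot distinguish from s."""
    return all(prop(t) for t in states if indistinguishable(i, s, t))

explored = lambda s: s[2]
for s in states:
    print(s, "robot 0 knows explored:", knows(0, explored, s))
```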